-
We design a system that learns how to edit visual programs. Our edit network consumes a complete input program and a visual target. From this input, we task our network with predicting a local edit operation that could be applied to the input program to improve its similarity to the target. To apply this scheme in domains that lack program annotations, we develop a self-supervised learning approach that integrates this edit network into a bootstrapped finetuning loop along with a network that predicts entire programs in one shot. Our joint finetuning scheme, when coupled with an inference procedure that initializes a population from the one-shot model and evolves members of this population with the edit network, helps to infer more accurate visual programs. Over multiple domains, we experimentally compare our method against the alternative of using only the one-shot model, and find that even under equal search-time budgets, our editing-based paradigm provides significant advantages.
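As a rough illustration of the inference procedure described above, the following Python sketch assumes hypothetical `one_shot_model`, `edit_network`, and `similarity` interfaces (none of these names come from the paper) and shows one way a population could be seeded from one-shot samples and evolved with predicted edits.

```python
import random

def infer_program(one_shot_model, edit_network, target, similarity,
                  pop_size=16, rounds=50):
    """Population-based inference sketch: one-shot samples seed the
    population, and the edit network proposes local improvements."""
    # Seed the population with whole-program samples from the one-shot model.
    population = [one_shot_model.sample(target) for _ in range(pop_size)]

    for _ in range(rounds):
        # Ask the edit network for a local edit to a randomly chosen member.
        program = random.choice(population)
        edited = edit_network.apply_predicted_edit(program, target)

        # Replace the weakest member whenever the edited program beats it.
        worst = min(population, key=lambda p: similarity(p, target))
        if similarity(edited, target) > similarity(worst, target):
            population.remove(worst)
            population.append(edited)

    # Return the best program found within the search budget.
    return max(population, key=lambda p: similarity(p, target))
```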
-
The ability to edit 3D assets with natural language presents a compelling paradigm to aid in the democratization of 3D content creation. However, while natural language is often effective at communicating general intent, it is poorly suited for specifying exact manipulation. To address this gap, we introduce ParSEL, a system that enables controllable editing of high-quality 3D assets with natural language. Given a segmented 3D mesh and an editing request, ParSEL produces a parameterized editing program. Adjusting these parameters allows users to explore shape variations with exact control over the magnitude of the edits. To infer editing programs which align with an input edit request, we leverage the abilities of large language models (LLMs). However, we find that although LLMs excel at identifying the initial edit operations, they often fail to infer complete editing programs, resulting in outputs that violate shape semantics. To overcome this issue, we introduce Analytical Edit Propagation (AEP), an algorithm which extends a seed edit with additional operations until a complete editing program has been formed. Unlike prior methods, AEP searches for analytical editing operations compatible with a range of possible user edits through the integration of computer algebra systems for geometric analysis. Experimentally, we demonstrate ParSEL's effectiveness in enabling controllable editing of 3D objects from natural language requests, compared with alternative system designs.
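The loop below is a minimal sketch of the extend-until-complete behavior attributed to Analytical Edit Propagation; the helpers `find_compatible_ops` and `violates_semantics` are hypothetical stand-ins for the paper's geometric analysis and semantic checks, not its actual interfaces.

```python
def analytical_edit_propagation(seed_edit, mesh, find_compatible_ops,
                                violates_semantics, max_ops=32):
    """Sketch of an extend-until-complete loop for a seed edit."""
    program = [seed_edit]                    # the LLM-proposed seed operation
    while violates_semantics(mesh, program) and len(program) < max_ops:
        candidates = find_compatible_ops(mesh, program)
        if not candidates:
            break                            # no analytical extension found
        program.append(candidates[0])        # add the best-ranked compatible operation
    return program                           # a parameterized editing program
```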
-
People grasp flexible visual concepts from a few examples. We explore a neurosymbolic system that learns how to infer programs that capture visual concepts in a domain-general fashion. We introduce Template Programs: programmatic expressions from a domain-specific language that specify structural and parametric patterns common to an input concept. Our framework supports multiple concept-related tasks, including few-shot generation and co-segmentation through parsing. We develop a learning paradigm that allows us to train networks that infer Template Programs directly from visual datasets that contain concept groupings. We run experiments across multiple visual domains: 2D layouts, Omniglot characters, and 3D shapes. We find that our method outperforms task-specific alternatives, and performs competitively against domain-specific approaches for the limited domains where they exist.
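Purely as a toy illustration of a template whose parametric holes are shared across a concept's expressions (the hole syntax and expressions here are invented, not the paper's DSL), one might picture something like:

```python
# Hypothetical template: a structural pattern whose ?holes are filled
# per concept instance, so all instances share the same structure.
TEMPLATE = ["rect(w=?w, h=?h)",
            "repeat(?n, translate(?dx, 0), rect(w=?w, h=?h))"]

def instantiate(template, hole_values):
    """Fill every ?hole in the template expressions with a concrete value."""
    filled = []
    for expr in template:
        for hole, value in hole_values.items():
            expr = expr.replace(hole, str(value))
        filled.append(expr)
    return filled

print(instantiate(TEMPLATE, {"?w": 4, "?h": 2, "?n": 3, "?dx": 5}))
# ['rect(w=4, h=2)', 'repeat(3, translate(5, 0), rect(w=4, h=2))']
```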
-
The ecological, evolutionary, economic, and cultural importance of algae necessitates a continued integration of phycological research, education, outreach, and engagement. Here, we comment on several topics discussed during a networking workshop—Algae and the Environment—that brought together phycological researchers from a variety of institutions and career stages. We share some of our perspectives on the state of phycology by examining gaps in teaching and research. We identify action areas where we urge the phycological community to prepare itself to embrace the rapidly changing world. We emphasize the need for more trained taxonomists, as well as for integration with molecular techniques, which, although expensive and complicated, are important. An essential benefit of these integrative studies is the creation of high-quality algal reference barcoding libraries augmented with morphological, physiological, and ecological data that are important for studies of systematics and crucial for the accuracy of metabarcoding bioassessment. We highlight different teaching approaches for engaging undergraduate students in algal studies and the importance of algal field courses, forays, and professional phycological societies in supporting the algal training of students, professionals, and citizen scientists.
-
The Vera C. Rubin Legacy Survey of Space and Time will discover thousands of microlensing events across the Milky Way, allowing for the study of populations of exoplanets, stars, and compact objects. We evaluate numerous survey strategies simulated in the Rubin Operation Simulations to assess the discovery and characterization efficiencies of microlensing events. We have implemented three metrics in the Rubin Metric Analysis Framework: a discovery metric and two characterization metrics, where one estimates how well the light curve is covered and the other quantifies how precisely event parameters can be determined. We also assess the characterizability of microlensing parallax, critical for detection of free-floating black hole lenses. We find that, given Rubin’s baseline cadence, the discovery and characterization efficiency will be higher for longer-duration and larger-parallax events. Microlensing discovery efficiency is dominated by the observing footprint, where more time spent looking at regions of high stellar density, including the Galactic bulge, Galactic plane, and Magellanic Clouds, leads to higher discovery and characterization rates. However, if the observations are stretched over too wide an area, including low-priority areas of the Galactic plane with fewer stars and higher extinction, event characterization suffers by >10%. This could impact exoplanet, binary star, and compact object events alike. We find that some rolling strategies (where Rubin focuses on a fraction of the sky in alternating years) in the Galactic bulge can lead to a 15%–20% decrease in microlensing parallax characterization, so rolling strategies should be chosen carefully to minimize losses.
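As a self-contained illustration in the spirit of a discovery metric (deliberately not the actual Rubin Metric Analysis Framework code, and with invented thresholds), a discovery-style criterion on a simulated visit cadence might look like:

```python
import numpy as np

def discovery_like_metric(obs_mjd, t0, t_E, n_in_required=3, n_out_required=10):
    """Toy discovery criterion: enough visits inside the magnified window
    and enough baseline visits outside it (thresholds are invented)."""
    obs_mjd = np.asarray(obs_mjd, dtype=float)
    inside = np.abs(obs_mjd - t0) < t_E          # visits during the event
    return inside.sum() >= n_in_required and (~inside).sum() >= n_out_required

# Example: weekly visits around a 30-day event peaking at MJD 60400.
visits = np.arange(60200.0, 60600.0, 7.0)
print(discovery_like_metric(visits, t0=60400.0, t_E=30.0))   # True
```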
-
Worldwide, enhancement of oyster populations is undertaken to achieve a variety of goals including support of food production, local economies, water quality, coastal habitat, biodiversity, and cultural heritage. Although numerous strategies for improving oyster stocks exist, enhancement efforts can be thwarted by longstanding conflict among community groups about which strategies to implement, where efforts should be focused, and how much funding should be allocated to each strategy. The objective of this paper is to compare two engagement approaches that resulted in recommendations for multi-benefit enhancements to oyster populations and the oyster industry in Maryland, U.S.A., using the Consensus Solutions process with collaborative simulation modeling. These recommendations were put forward by the OysterFutures Workgroup in 2018 and the Maryland Oyster Advisory Commission (OAC) in 2021. Notable similarities between the efforts were the basic principles of the Consensus Solutions process: neutral facilitation, a 75% agreement threshold, the presence of management agency leadership at the meetings, a scientific support team that created a management scenario model in collaboration with community group representatives, numerous opportunities for representatives to listen to each other, and a structured consensus building process for idea generation, rating, and approval of management options. To ensure meaningful representation by the most affected user groups, the goal for membership composition was 60% from industry and 40% from advocacy, agency, and academic groups in both processes. Important differences between the processes included the impetus for the process (a research program versus a legislatively mandated process), the size of the groups, the structure of the meetings, and the clear and pervasive impact of the COVID-19 pandemic on the ability of OAC members to interact. Despite differences and challenges, both groups were able to agree on a package of recommendations, indicating that consensus-based processes with collaborative modeling offer viable paths toward coordinated cross-sector natural resource decisions with scientific basis and community support. In addition, collaborative modeling resulted in ‘myth busting’ findings that allowed participants to reassess and realign their thinking about how the coupled human–oyster system would respond to management changes.
-
We introduce ShapeCoder, the first system capable of taking a dataset of shapes, represented with unstructured primitives, and jointly discovering (i) useful abstraction functions and (ii) programs that use these abstractions to explain the input shapes. The discovered abstractions capture common patterns (both structural and parametric) across a dataset, so that programs rewritten with these abstractions are more compact, and suppress spurious degrees of freedom. ShapeCoder improves upon previous abstraction discovery methods, finding better abstractions, for more complex inputs, under less stringent input assumptions. This is principally made possible by two methodological advancements: (a) a shape-to-program recognition network that learns to solve sub-problems and (b) the use of e-graphs, augmented with a conditional rewrite scheme, to determine, in a tractable manner, when abstractions with complex parametric expressions can be applied. We evaluate ShapeCoder on multiple datasets of 3D shapes, where primitive decompositions are either parsed from manual annotations or produced by an unsupervised cuboid abstraction method. In all domains, ShapeCoder discovers a library of abstractions that captures high-level relationships, removes extraneous degrees of freedom, and achieves better dataset compression compared with alternative approaches. Finally, we investigate how programs rewritten to use discovered abstractions prove useful for downstream tasks.
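To make the compression intuition concrete, here is an invented example of what a discovered parametric abstraction could look like; the `shelf` macro and its parameters are illustrative, not abstractions ShapeCoder actually reports.

```python
def shelf(width, height, n_boards):
    """Hypothetical abstraction: two side panels plus n evenly spaced boards,
    each emitted as a (name, x, y, w, h) primitive tuple."""
    prims = [("cuboid", 0.0, 0.0, 0.1, height),        # left side panel
             ("cuboid", width, 0.0, 0.1, height)]      # right side panel
    for i in range(n_boards):
        y = height * (i + 1) / (n_boards + 1)
        prims.append(("cuboid", 0.0, y, width, 0.05))  # horizontal board
    return prims

# One parametric call stands in for 2 + n_boards raw primitive statements,
# and sharing (width, height) removes spurious degrees of freedom.
print(len(shelf(1.0, 2.0, n_boards=3)), "primitives from a single call")
```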
-
We present SHRED, a method for 3D SHape REgion Decomposition. SHRED takes a 3D point cloud as input and uses learned local operations to produce a segmentation that approximates fine-grained part instances. We endow SHRED with three decomposition operations: splitting regions, fixing the boundaries between regions, and merging regions together. Modules are trained independently and locally, allowing SHRED to generate high-quality segmentations for categories not seen during training. We train and evaluate SHRED with fine-grained segmentations from PartNet; using its merge-threshold hyperparameter, we show that SHRED produces segmentations that better respect ground-truth annotations than baseline methods, at any desired decomposition granularity. Finally, we demonstrate that SHRED is useful for downstream applications, outperforming all baselines on zero-shot fine-grained part instance segmentation and few-shot fine-grained semantic segmentation when combined with methods that learn to label shape regions.
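Below is a toy version of threshold-controlled region merging, with `merge_score` standing in for a learned merge module; the names and the greedy strategy are assumptions for illustration, not SHRED's implementation.

```python
def merge_regions(regions, adjacency, merge_score, threshold=0.5):
    """Greedy region merging controlled by a threshold: lower thresholds
    merge more aggressively and yield coarser decompositions."""
    merged = True
    while merged:
        merged = False
        for a, b in sorted(adjacency):
            if merge_score(regions[a], regions[b]) > threshold:
                regions[a] |= regions.pop(b)        # absorb region b into region a
                # Redirect b's adjacencies to a and drop self-loops.
                adjacency = {(a if x == b else x, a if y == b else y)
                             for x, y in adjacency}
                adjacency = {(x, y) for x, y in adjacency if x != y}
                merged = True
                break
    return regions

# Example: three point-index regions where regions 0 and 1 belong together.
regions = {0: {0, 1, 2}, 1: {3, 4}, 2: {5, 6, 7}}
adjacency = {(0, 1), (1, 2)}
score = lambda r1, r2: 0.9 if len(r1 | r2) <= 5 else 0.1   # stand-in scorer
print(merge_regions(regions, adjacency, score, threshold=0.5))
# {0: {0, 1, 2, 3, 4}, 2: {5, 6, 7}}
```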